Resource Utilization Optimized Federated Learning

Zhang, Zihan, Wong, Leon, Varghese, Blesson

arXiv.org Artificial Intelligence

Zihan Zhang (University of St Andrews, UK), Leon Wong (Rakuten Mobile, Inc., Japan), Blesson Varghese (University of St Andrews, UK)

Abstract -- Federated learning (FL) systems facilitate distributed machine learning across a server and multiple devices. However, FL systems have low resource utilization, limiting their practical use in the real world. This inefficiency primarily arises from two types of idle time: (i) task dependency between the server and devices, and (ii) stragglers among heterogeneous devices. This paper introduces FedOptima, a resource-optimized FL system designed to minimize both types of idle time simultaneously; existing systems do not eliminate or reduce both at the same time. First, devices operate independently of each other by using asynchronous aggregation to eliminate straggler effects, and independently of the server by utilizing auxiliary networks to minimize idle time caused by task dependency. Second, the server performs centralized training using a task scheduler that ensures balanced contributions from all devices, improving model accuracy. Four state-of-the-art offloading-based and asynchronous FL methods are chosen as baselines. Experimental results show that, compared to the best results of the baselines on convolutional neural networks and transformers on multiple lab-based testbeds, FedOptima (i) achieves higher or comparable accuracy, (ii) accelerates training by 1.9× to 21.8×, (iii) reduces server and device idle time by up to 93.9% and 81.8%, respectively, and (iv) increases throughput by 1.1× to 2.0×.

Index Terms -- federated learning, distributed system, resource utilization, idle time, edge computing

I. INTRODUCTION

Federated learning (FL) [1]-[3] offers distributed training across user devices as an alternative to traditional centralized machine learning. Devices train a deep neural network (DNN) on their data and send model parameters to the server. The server aggregates these into a global model, which is then distributed to the devices for the next round. Thus, FL utilizes insight from user data via local models to train a global model without sharing the original data. Sub-optimal resource utilization is a critical problem in FL that results in two types of idle time on the server and devices (see Section II-A). The first is due to task dependency between server and devices: the server is idle for considerable periods when aggregating local models from devices, as it waits for on-device training to complete, which is usually time-consuming. The second is due to hardware heterogeneity: stragglers, or slower devices, require more time to train than faster devices, which idle while waiting for the stragglers. Two categories of methods are considered in the existing literature for reducing idle time.
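The asynchronous aggregation the abstract describes can be made concrete with a small sketch. The code below is an illustrative, simplified server loop, not FedOptima's published algorithm: each device update is folded into the global model as soon as it arrives, down-weighted by its staleness, so no device waits on a straggler or a synchronization barrier. The staleness_weight rule and all names are assumptions for illustration.

```python
# Illustrative sketch of asynchronous federated aggregation (not FedOptima's
# exact algorithm): the server mixes in each device update as it arrives,
# down-weighting stale updates so fast devices never wait for stragglers.
import numpy as np

def staleness_weight(staleness, alpha=0.6):
    """Down-weight updates computed against an old global model (assumed rule)."""
    return alpha / (1.0 + staleness)

class AsyncServer:
    def __init__(self, model_params):
        self.params = model_params      # global model as a flat vector
        self.version = 0                # incremented on every aggregation

    def on_device_update(self, device_params, base_version):
        """Called whenever any device finishes local training."""
        staleness = self.version - base_version
        w = staleness_weight(staleness)
        # Mix the device model into the global model; no barrier across devices.
        self.params = (1.0 - w) * self.params + w * device_params
        self.version += 1
        return self.params, self.version  # device continues from the new model

# Usage: two devices report at different speeds; neither blocks the other.
server = AsyncServer(np.zeros(4))
params, v = server.on_device_update(np.ones(4), base_version=0)      # fresh
params, v = server.on_device_update(2 * np.ones(4), base_version=0)  # stale by 1
```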


Strict Partitioning for Sporadic Rigid Gang Tasks

Sun, Binqi, Kloda, Tomasz, Caccamo, Marco

arXiv.org Artificial Intelligence

The rigid gang task model is based on the idea of executing multiple threads simultaneously on a fixed number of processors to increase efficiency and performance. Although there is extensive literature on global rigid gang scheduling, partitioned approaches have several practical advantages (e.g., task isolation and reduced scheduling overheads). In this paper, we propose a new partitioned scheduling strategy for rigid gang tasks, named strict partitioning. The method creates disjoint partitions of tasks and processors to avoid inter-partition interference. Moreover, it tries to assign tasks with similar volumes (i.e., parallelisms) to the same partition so that the intra-partition interference can be reduced. Within each partition, the tasks can be scheduled using any type of scheduler, which allows the use of a less pessimistic schedulability test. Extensive synthetic experiments and a case study based on Edge TPU benchmarks show that strict partitioning achieves better schedulability performance than state-of-the-art global gang schedulability analyses for both preemptive and non-preemptive rigid gang task sets.
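As a rough illustration of the strict-partitioning idea, the sketch below groups gang tasks of similar volume into disjoint (tasks, processors) partitions with a greedy first-fit heuristic. The utilization cap stands in for a real per-partition schedulability test; the heuristic is a simplification for illustration, not the paper's algorithm.

```python
# Minimal sketch of strict partitioning (heuristic assumptions mine): build
# disjoint (tasks, processors) partitions so gang tasks with similar volumes
# share a partition and never interfere across partitions.
from dataclasses import dataclass, field

@dataclass
class GangTask:
    name: str
    volume: int         # processors the task needs simultaneously
    utilization: float  # fraction of its processors it keeps busy

@dataclass
class Partition:
    processors: int
    tasks: list = field(default_factory=list)

    def load(self):
        return sum(t.volume * t.utilization for t in self.tasks)

def strict_partition(tasks, total_procs, cap=0.7):
    """Greedy first-fit by descending volume; 'cap' is an assumed utilization
    bound standing in for a real intra-partition schedulability test."""
    partitions, free = [], total_procs
    for t in sorted(tasks, key=lambda t: t.volume, reverse=True):
        home = next((p for p in partitions
                     if p.processors >= t.volume
                     and p.load() + t.volume * t.utilization <= cap * p.processors),
                    None)
        if home is None:
            if free < t.volume:
                raise ValueError(f"cannot place {t.name}")
            home = Partition(processors=t.volume)  # open a new disjoint partition
            free -= t.volume
            partitions.append(home)
        home.tasks.append(t)
    return partitions

parts = strict_partition(
    [GangTask("a", 4, 0.5), GangTask("b", 4, 0.4), GangTask("c", 2, 0.6)],
    total_procs=8)
```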


On the Statistical Benefits of Curriculum Learning

Xu, Ziping, Tewari, Ambuj

arXiv.org Machine Learning

Curriculum learning (CL) is a commonly used machine learning training strategy. However, we still lack a clear theoretical understanding of CL's benefits. In this paper, we study the benefits of CL in the multitask linear regression problem under both structured and unstructured settings. For both settings, we derive the minimax rates for CL with the oracle that provides the optimal curriculum and without the oracle, where the agent has to adaptively learn a good curriculum. Our results reveal that adaptive learning can be fundamentally harder than oracle learning in the unstructured setting, but it merely introduces a small extra term in the structured setting. To connect theory with practice, we provide justification for a popular empirical method that selects tasks with the highest local prediction gain by comparing its guarantees with the minimax rates mentioned above.
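The "highest local prediction gain" rule the abstract justifies can be sketched as a probe-and-commit loop, shown below. The train_step and val_loss callables are placeholders of my own, not the paper's construction.

```python
# Hedged sketch of a prediction-gain curriculum: at each round, try a small
# step on every candidate task and commit to the task whose step most reduces
# validation loss on the target distribution.
import copy

def pick_next_task(model, tasks, train_step, val_loss):
    """Return the task index with the largest one-step drop in validation loss."""
    base = val_loss(model)
    gains = []
    for task in tasks:
        trial = copy.deepcopy(model)          # probe without committing
        train_step(trial, task)               # one small update on this task
        gains.append(base - val_loss(trial))  # local prediction gain
    return max(range(len(tasks)), key=lambda i: gains[i])

def run_curriculum(model, tasks, rounds, train_step, val_loss):
    order = []
    for _ in range(rounds):
        i = pick_next_task(model, tasks, train_step, val_loss)
        train_step(model, tasks[i])           # commit the winning step
        order.append(i)
    return order                              # the adaptively learned curriculum
```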


Ansor: Generating High-Performance Tensor Programs for Deep Learning

Zheng, Lianmin, Jia, Chengfan, Sun, Minmin, Wu, Zhao, Yu, Cody Hao, Haj-Ali, Ameer, Wang, Yida, Yang, Jun, Zhuo, Danyang, Sen, Koushik, Gonzalez, Joseph E., Stoica, Ion

arXiv.org Machine Learning

High-performance tensor programs are crucial to guarantee efficient execution of deep neural networks. However, obtaining performant tensor programs for different operators on various hardware platforms is notoriously challenging. Currently, deep learning systems rely on vendor-provided kernel libraries or various search strategies to get performant tensor programs. These approaches either require significant engineering effort to develop platform-specific optimization code or fall short of finding high-performance programs due to restricted search space and ineffective exploration strategy. We present Ansor, a tensor program generation framework for deep learning applications. Compared with existing search strategies, Ansor explores many more optimization combinations by sampling programs from a hierarchical representation of the search space. Ansor then fine-tunes the sampled programs with evolutionary search and a learned cost model to identify the best programs. Ansor can find high-performance programs that are outside the search space of existing state-of-the-art approaches. In addition, Ansor utilizes a task scheduler to simultaneously optimize multiple subgraphs in deep neural networks. We show that Ansor improves the execution performance of deep neural networks relative to the state-of-the-art on the Intel CPU, ARM CPU, and NVIDIA GPU by up to 3.8×, 2.6×, and 1.7×, respectively.
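The sample-then-fine-tune loop can be sketched generically, as below. This is a schematic of evolutionary search guided by a learned cost model, not Ansor's actual implementation or TVM's API: candidates are ranked cheaply by predicted runtime, and only a small top-k is measured on real hardware each generation.

```python
# Schematic sketch (not TVM/Ansor's API) of cost-model-guided evolutionary
# search over candidate tensor programs. All callables are placeholders:
# sample_program draws from the search space, mutate perturbs a program,
# cost_model predicts runtime, and measure times a program on real hardware.
import random

def evolutionary_search(sample_program, mutate, cost_model, measure,
                        population=64, generations=10, top_k=8):
    pool = [sample_program() for _ in range(population)]   # random initial pool
    best, best_time = None, float("inf")
    for _ in range(generations):
        # Cheap ranking with the learned cost model (predicted runtime).
        pool.sort(key=cost_model)
        elites = pool[:top_k]
        # Expensive real measurements only for the predicted-best programs.
        for prog in elites:
            t = measure(prog)
            if t < best_time:
                best, best_time = prog, t
        # Next generation: keep elites, mutate them to explore nearby programs.
        pool = elites + [mutate(random.choice(elites))
                         for _ in range(population - top_k)]
    return best, best_time
```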


Predictive CPU isolation of containers at Netflix

#artificialintelligence

We've all had noisy neighbors at one point in our lives. Whether it's at a cafe or through the wall of an apartment, it is always disruptive. The need for good manners in shared spaces turns out to be important not just for people, but for your Docker containers too. When you're running in the cloud, your containers are in a shared space; in particular, they share the CPU memory hierarchy of the host instance. Because microprocessors are so fast, computer architecture design has evolved towards adding various levels of caching between compute units and the main memory, in order to hide the latency of bringing the bits to the brains.


Nvidia GPUs for data science, analytics, and distributed machine learning using Python with Dask | ZDNet

#artificialintelligence

Nvidia has been more than a hardware company for a long time. As its GPUs are broadly used to run machine learning workloads, machine learning has become a key priority for Nvidia. At its GTC event this week, Nvidia made a number of related points, aiming to build on machine learning and extend to data science and analytics. Nvidia wants to "couple software and hardware to deliver the advances in computing power needed to transform data into insights and intelligence." Jensen Huang, Nvidia CEO, emphasized the collaborative aspect between chip architecture, systems, algorithms and applications.